Monte Carlo method
Table of Contents
- 1. Monte Carlo method
- 1.2. Overview
- 1.3. History
- 1.4. Definitions
- 1.5. Monte Carlo and random numbers
- 1.6. Monte Carlo simulation versus "what if" scenarios
- 1.7. Applications
- 1.8. Physical sciences
- 1.9. Engineering
- 1.10. Climate change and radiative forcing
- 1.11. Computational biology
- 1.12. Computer graphics
- 1.13. Applied statistics
- 1.14. Artificial intelligence for games
- 1.15. Design and visuals
- 1.16. Search and rescue
- 1.17. Finance and business
- 1.18. Law
- 1.19. Use in mathematics
- 1.20. Integration
- 1.21. Simulation and optimization
- 1.22. Inverse problems
- 1.23. Philosophy
1. Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures.
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.[2]
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation.
When the probability distribution of the variable is parametrized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler.[3][4][5] The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution.[6][7] By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
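As a minimal sketch of this idea (the chain and its transition probabilities below are illustrative assumptions, not taken from the article), one can simulate a two-state Markov chain and observe that the fraction of time spent in each state converges to the chain's stationary distribution, as the ergodic theorem predicts:

```python
import random

# Illustrative two-state Markov chain (an assumption for this sketch).
# P(stay in 0) = 0.9 and P(move 1 -> 0) = 0.3 give the stationary
# distribution (0.75, 0.25).
P_TO_ZERO = {0: 0.9, 1: 0.3}  # probability of landing in state 0

random.seed(0)
state, visits = 0, [0, 0]
n_steps = 100_000
for _ in range(n_steps):
    state = 0 if random.random() < P_TO_ZERO[state] else 1
    visits[state] += 1

# The empirical measure of the visited states approximates the
# stationary distribution (0.75, 0.25).
print([v / n_steps for v in visits])
```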
In other problems, the objective is to generate draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation).[8][9]
1.2. Overview
For example, consider a quadrant (a circular sector of radius 1) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated by uniformly scattering points over the square, counting the fraction that fall inside the quadrant, and multiplying by 4. If the points are not uniformly distributed, then the approximation will be poor.
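A minimal Python sketch of this example (the function name is illustrative):

```python
import random

def estimate_pi(n_samples: int) -> float:
    """Estimate pi by uniform sampling over the unit square.

    The fraction of points inside the inscribed quarter circle
    approximates its area, pi/4.
    """
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

random.seed(42)
print(estimate_pi(1_000_000))  # ~3.14, slowly improving with n_samples
```

Sampling x and y from a non-uniform distribution would skew the inside/outside ratio, which is exactly the failure mode described above.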
1.3. History
In the late 1940s, Stanislaw Ulam invented the modern version of the Markov chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory.
Being secret, the work of von Neumann and Ulam required a code name.[14] A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
The theory of more sophisticated mean field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics.
1.4. Definitions
1.5. Monte Carlo and random numbers
The main idea behind this method is that results are computed based on repeated random sampling and statistical analysis. Monte Carlo simulation is, in effect, random experimentation, used in cases where the results of the experiments are not known in advance.
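As a hedged illustration (the birthday experiment below is a standard textbook example, not taken from this article), a probability that is hard to guess in advance can be estimated by repeating a random experiment many times and analyzing the outcomes statistically:

```python
import random

def shared_birthday(n_people: int) -> bool:
    """One random experiment: do any two people share a birthday?"""
    days = [random.randrange(365) for _ in range(n_people)]
    return len(set(days)) < n_people

random.seed(1)
n_trials = 100_000
hits = sum(shared_birthday(23) for _ in range(n_trials))
print(hits / n_trials)  # close to the exact value, about 0.507
```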
1.6. Monte Carlo simulation versus "what if" scenarios
1.7. Applications
1.8. Physical sciences
1.9. Engineering
1.10. Climate change and radiative forcing
1.11. Computational biology
1.12. Computer graphics
1.13. Applied statistics
1.14. Artificial intelligence for games
1.15. Design and visuals
1.16. Search and rescue
1.17. Finance and business
1.18. Law
1.19. Use in mathematics
1.20. Integration
By the central limit theorem, this method displays 1/√N convergence; that is, quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
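This scaling can be checked empirically. The sketch below (an illustrative assumption: integrating x² over [0, 1], whose exact value is 1/3) prints the absolute error at sample sizes that quadruple each step:

```python
import random
import statistics

def mc_integrate(f, n_samples: int) -> float:
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    return statistics.fmean(f(random.random()) for _ in range(n_samples))

random.seed(0)
f = lambda x: x * x  # exact integral over [0, 1] is 1/3
for n in (1_000, 4_000, 16_000):
    est = mc_integrate(f, n)
    print(n, abs(est - 1 / 3))  # error shrinks roughly like 1/sqrt(n)
```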
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
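As an illustrative sketch of the Metropolis–Hastings algorithm (the target here is assumed to be a standard normal known only up to a normalizing constant, with a symmetric Gaussian random-walk proposal; this is not code from the article):

```python
import math
import random

def metropolis_hastings(log_target, n_samples: int, step: float = 1.0):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

random.seed(0)
# Target density proportional to exp(-x^2 / 2), i.e. a standard normal.
draws = metropolis_hastings(lambda x: -0.5 * x * x, 50_000)
print(sum(draws) / len(draws))  # near 0, the target mean
```

Because the proposal is symmetric, the Hastings correction term cancels and only the ratio of target densities appears in the acceptance test.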
1.21. Simulation and optimization
Another powerful and very popular application of random numbers in numerical simulation is numerical optimization.
Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end.
Such methods have been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration spaces. Reference [97] is a comprehensive review of many issues related to simulation and optimization.
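A minimal sketch of Monte Carlo optimization in this spirit is simulated annealing, shown below on a one-dimensional test function (the function, step size, and cooling schedule are illustrative assumptions, not from the article):

```python
import math
import random

def simulated_annealing(f, x0: float, n_iters: int = 20_000):
    """Minimize f by random perturbations, occasionally accepting
    uphill moves with a probability that decays as the system cools."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(1, n_iters + 1):
        temp = 1.0 / i                       # simple cooling schedule
        cand = x + random.gauss(0.0, 0.5)    # random perturbation
        fcand = f(cand)
        if fcand < fx or random.random() < math.exp((fx - fcand) / temp):
            x, fx = cand, fcand
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

random.seed(0)
# A multimodal test function; accepting occasional uphill moves lets
# the search escape local minima that would trap a greedy method.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x)
print(simulated_annealing(f, x0=3.0))
```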
1.22. Inverse problems
1.23. Philosophy
A popular exposition of the Monte Carlo method was given by McCracken.[100] The method's general philosophy was discussed by Elishakoff[101] and by Grüne-Yanoff and Weirich.